Chapter 2: Univariate Modelling Techniques

1 Introduction

Univariate models require only one variable to be formulated. They use information from the past values of that variable to forecast its future values.

Note

Disadvantage: Univariate techniques cannot provide explanations for the possible causes of change in the variable of interest — they only describe what has happened, not why.

The methods covered in this chapter are:

Method Best For
Naive Forecast No discernible pattern; benchmark
Naive with Trend Series with slow, steady trend
Average Change Short series with consistent changes
Average Percent Change Constant percentage growth
Single Exponential Smoothing Stable series, no trend
Double Exponential Smoothing Linear trend, short series
Holt’s Method Linear trend, more flexible
ARRES Time-varying alpha needed
Holt-Winters Trend + seasonality
Decomposition Forecasting Clear trend + seasonal components

2 Naive Methods

2.1 Naive Forecast

The naive forecast assumes that what happens today will happen again tomorrow (or at any other time in the future).

\[F_{t+m} = y_t\]

where \(m\) is the number of steps ahead.

When to use:

  • Works best when the historical data series contains no discernible pattern.
  • Performs well when the series fluctuates slowly.
  • Sudden changes in recent data severely degrade forecast accuracy.
  • Serves as a benchmark — other methods should outperform it.

2.1.1 Example

Compute the estimated values and forecast for 2024 using naive forecast. Then compute the MSE.

Show R Code
F_naive <- c(NA, price[-n_p])
e2_naive <- (price - F_naive)^2
mse_naive <- mean(e2_naive, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             F_t = F_naive, e2 = e2_naive),
  col.names = c("Year", "Price (RM)", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 1: Naive forecast — worked example
Year Price (RM) \(F_t\) \(e_t^2\)
2018 6 NA NA
2019 8 6 4
2020 9 8 1
2021 12 9 9
2022 15 12 9
2023 16 15 1
Show R Code
cat("Forecast for 2024 (Naive):", price[n_p], "\n")
Forecast for 2024 (Naive): 16 
Show R Code
cat("MSE:", round(mse_naive, 4), "\n")
MSE: 4.8 
Show R Code
ggplot(data.frame(Year = year, Actual = price, Forecast = F_naive),
       aes(x = Year)) +
  geom_line(aes(y = Actual,   colour = "Actual"),         linewidth = 0.9) +
  geom_point(aes(y = Actual,  colour = "Actual"),         size = 2.5) +
  geom_line(aes(y = Forecast, colour = "Naive Forecast"),
            linewidth = 0.9, linetype = "dashed", na.rm = TRUE) +
  geom_point(aes(y = Forecast, colour = "Naive Forecast"),
             size = 2.5, na.rm = TRUE) +
  scale_colour_manual(values = c("Actual" = "steelblue",
                                 "Naive Forecast" = "#d73027")) +
  labs(title = "Naive Forecast", x = "Year", y = "Price (RM)", colour = NULL) +
  theme_ts()
Figure 1: Naive forecast vs. actual values

Excel Tutorial: Naive Forecast in Excel


2.2 Naive with Trend

The naive method is modified to account for the trend component.

\[F_{t+1} = y_t \times \frac{y_t}{y_{t-1}}\]

Condition Interpretation
\(y_t > y_{t-1}\) Upward trend
\(y_t < y_{t-1}\) Downward trend
Note

This method is highly sensitive to changes in actual values.

2.2.1 Example

Show R Code
# F_{t+1} = y_t * (y_t / y_{t-1})  -- needs TWO prior values, so first two rows are NA
growth  <- c(NA, price[-1] / price[-n_p])          # growth rate at each t
F_trend <- c(NA, NA, sapply(3:n_p, function(i)     # F(2019) = NA: needs y(2017) which is absent
               price[i-1] * (price[i-1] / price[i-2])))
e2_trend <- (price - F_trend)^2
mse_trend <- mean(e2_trend, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             Growth = round(c(NA, price[-1]/price[-n_p]), 4),
             F_t = round(F_trend, 4), e2 = round(e2_trend, 4)),
  col.names = c("Year", "Price (RM)", "$y_t/y_{t-1}$", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 2: Naive with trend — worked example
Year Price (RM) \(y_t/y_{t-1}\) \(F_t\) \(e_t^2\)
2018 6 NA NA NA
2019 8 1.3333 NA NA
2020 9 1.1250 10.6667 2.7778
2021 12 1.3333 10.1250 3.5156
2022 15 1.2500 16.0000 1.0000
2023 16 1.0667 18.7500 7.5625
Show R Code
last_g     <- price[n_p] / price[n_p - 1]
f2024_trend <- price[n_p] * last_g
cat("Forecast for 2024 (Naive with Trend):", round(f2024_trend, 4), "\n")
Forecast for 2024 (Naive with Trend): 17.0667 
Show R Code
cat("MSE:", round(mse_trend, 4), "\n")
MSE: 3.714 

Excel Tutorial: Naive with Trend in Excel


3 Methods of Averages

3.1 Average Change

The forecast equals the actual value in the current period plus \(m\) times the average of the two most recent changes.

\[F_{t+m} = y_t + m \times \frac{(y_t - y_{t-1}) + (y_{t-1} - y_{t-2})}{2}\]

When to use:

  • Less influenced by historical data; responds quickly to changes.
  • Most useful when historical changes are of similar size.
  • Suitable for short and intermediate data series with consistent changes.

3.1.1 Example

Show R Code
chg  <- c(NA, diff(price))
# F_{t+1} = y_t + [(y_t-y_{t-1}) + (y_{t-1}-y_{t-2})]/2
# Needs y_{t-2}: F(2019) needs y(2017), F(2020) needs y(2017) -> both NA
# First computable forecast is F(2021) at i=4
F_ac  <- c(NA, NA, NA, sapply(4:n_p, function(i)
             price[i-1] + (chg[i-1] + chg[i-2]) / 2))
e2_ac <- (price - F_ac)^2
mse_ac <- mean(e2_ac, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             Change = round(chg, 4), F_t = round(F_ac, 4),
             e2 = round(e2_ac, 4)),
  col.names = c("Year", "Price (RM)", "$y_t - y_{t-1}$", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 3: Average change — worked example
Year Price (RM) \(y_t - y_{t-1}\) \(F_t\) \(e_t^2\)
2018 6 NA NA NA
2019 8 2 NA NA
2020 9 1 NA NA
2021 12 3 10.5 2.25
2022 15 3 14.0 1.00
2023 16 1 18.0 4.00
Show R Code
f2024_ac <- price[n_p] + (chg[n_p] + chg[n_p - 1]) / 2
cat("Forecast for 2024 (Average Change):", round(f2024_ac, 4), "\n")
Forecast for 2024 (Average Change): 18 
Show R Code
cat("MSE:", round(mse_ac, 4), "\n")
MSE: 2.4167 

Excel Tutorial: Average Change in Excel


3.2 Average Percent Change

\[F_{t+1} = y_t + \frac{\left(\frac{y_t - y_{t-1}}{y_{t-1}}\right) + \left(\frac{y_{t-1} - y_{t-2}}{y_{t-2}}\right)}{2} \times y_t\]

Important

May be unsuitable for forecasting beyond one or two periods due to compounding effects.

3.2.1 Example

Show R Code
pct   <- c(NA, diff(price) / price[-n_p])
# F_{t+1} = y_t * (1 + (p_t + p_{t-1})/2)
# p_{t-1} needs y_{t-2}: F(2019) and F(2020) both require y(2017) -> NA
# First computable forecast is F(2021) at i=4
F_apc <- c(NA, NA, NA, sapply(4:n_p, function(i)
             price[i-1] * (1 + (pct[i-1] + pct[i-2]) / 2)))
e2_apc  <- (price - F_apc)^2
mse_apc <- mean(e2_apc, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             PctChg = round(pct * 100, 2), F_t = round(F_apc, 4),
             e2 = round(e2_apc, 4)),
  col.names = c("Year", "Price (RM)", "% Change", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 4: Average percent change — worked example
Year Price (RM) % Change \(F_t\) \(e_t^2\)
2018 6 NA NA NA
2019 8 33.33 NA NA
2020 9 12.50 NA NA
2021 12 33.33 11.0625 0.8789
2022 15 25.00 14.7500 0.0625
2023 16 6.67 19.3750 11.3906
Show R Code
f2024_apc <- price[n_p] * (1 + (pct[n_p] + pct[n_p - 1]) / 2)
cat("Forecast for 2024 (Avg Percent Change):", round(f2024_apc, 4), "\n")
Forecast for 2024 (Avg Percent Change): 18.5333 
Show R Code
cat("MSE:", round(mse_apc, 4), "\n")
MSE: 4.1107 

Excel Tutorial: Average Percent Change in Excel


4 Exponential Smoothing Techniques

4.1 Single Exponential Smoothing

Also known as simple exponential smoothing (SES). Requires only one parameter \(\alpha\).

\[F_{t+1} = \alpha y_t + (1-\alpha)F_t, \quad 0 \leq \alpha \leq 1\]

Advantages over moving average:

  • Takes into account the most recent forecast.
  • Requires retention of only a limited amount of data.
Important

Disadvantage: Cannot forecast more than one step ahead.

4.1.1 Weighted Average Interpretation — Recursive Derivation

Consider the one-step-ahead forecast:

\[F_{t+1} = \alpha y_t + (1-\alpha)F_t \tag{1}\]

Then,

\[F_t = \alpha y_{t-1} + (1-\alpha)F_{t-1} \tag{2}\]

Substitute Equation (2) into Equation (1):

\[F_{t+1} = \alpha y_t + (1-\alpha)\left[\alpha y_{t-1} + (1-\alpha)F_{t-1}\right]\]

\[F_{t+1} = \alpha y_t + \alpha(1-\alpha)y_{t-1} + (1-\alpha)^2 F_{t-1} \tag{3}\]

Let,

\[F_{t-1} = \alpha y_{t-2} + (1-\alpha)F_{t-2} \tag{4}\]

Substitute Equation (4) into Equation (3):

\[F_{t+1} = \alpha y_t + \alpha(1-\alpha)y_{t-1} + (1-\alpha)^2\left[\alpha y_{t-2} + (1-\alpha)F_{t-2}\right]\]

\[F_{t+1} = \alpha y_t + \alpha(1-\alpha)y_{t-1} + \alpha(1-\alpha)^2 y_{t-2} + (1-\alpha)^3 F_{t-2}\]

Continuing the recursive substitution for \(F_{t-2},\ F_{t-3}\) and so forth yields the final weighted average form:

\[F_{t+1} = \alpha y_t + \alpha(1-\alpha)y_{t-1} + \alpha(1-\alpha)^2 y_{t-2} + \cdots + \alpha(1-\alpha)^{t-1}y_1 + (1-\alpha)^t F_1\]

This shows that \(F_{t+1}\) is a weighted average of all past observations \(y_t, y_{t-1}, y_{t-2}, \ldots, y_1\) with respective weights:

\[\alpha,\quad \alpha(1-\alpha),\quad \alpha(1-\alpha)^2,\quad \alpha(1-\alpha)^3,\quad \ldots,\quad \alpha(1-\alpha)^{t-1},\quad (1-\alpha)^t\]

The weights decrease exponentially as we move further into the past — this is the damping effect. The most recent observation \(y_t\) provides the largest contribution; successive earlier observations provide smaller and smaller contributions.

Note

Larger \(\alpha\) → faster damping → past observations lose influence quickly. Smaller \(\alpha\) → slower damping → past observations retain influence longer.

Example: \(\alpha = 0.8\)

The weights are \(0.8,\ 0.16,\ 0.032,\ 0.0064,\ 0.00128\) for \(y_t,\ y_{t-1},\ y_{t-2},\ y_{t-3},\ y_{t-4}\) respectively.

\[F_{t+1} = 0.8y_t + 0.16y_{t-1} + 0.032y_{t-2} + 0.0064y_{t-3} + 0.00128y_{t-4} + \cdots\]

Observations earlier than \(t-3\) have a negligible effect on the current estimate.

Example: \(\alpha = 0.2\)

The weights are \(0.2,\ 0.16,\ 0.128,\ 0.1024\) for \(y_t,\ y_{t-1},\ y_{t-2},\ y_{t-3}\) respectively.

\[F_{t+1} = 0.2y_t + 0.16y_{t-1} + 0.128y_{t-2} + 0.1024y_{t-3} + \cdots\]

The weights diminish slowly compared with those for a large value of \(\alpha\).

\(\alpha\) Behaviour
Near 1 Reacts quickly; heavy weight on recent observation; fast damping
Near 0 Smooth; weights diminish slowly; more historical influence

Comparison with 3-period Moving Average:

\[\hat{y}_2 = \frac{y_1 + y_2 + y_3}{3} = \frac{1}{3}y_1 + \frac{1}{3}y_2 + \frac{1}{3}y_3\]

The MA uses constant equal weights of \(\frac{1}{3}\) — it does not distinguish between the most recent and older observations, unlike SES.
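The equivalence between the recursive form and the weighted-average form can be checked numerically. A minimal sketch, assuming the annual price series from this chapter's worked examples and α = 0.8:

```r
# Verify: recursive SES equals the explicit weighted-average form
alpha <- 0.8
y     <- c(6, 8, 9, 12, 15, 16)   # price series from the worked examples (assumed)
t_n   <- length(y)

# Recursive form, initialised with F_1 = y_1
F_rec <- y[1]
for (i in 1:t_n) F_rec <- alpha * y[i] + (1 - alpha) * F_rec

# Explicit form: weight alpha*(1-alpha)^k on y_{t-k}, residual weight on F_1
w     <- alpha * (1 - alpha)^(0:(t_n - 1))
F_sum <- sum(w * rev(y)) + (1 - alpha)^t_n * y[1]

all.equal(F_rec, F_sum)    # the two forms coincide
sum(w) + (1 - alpha)^t_n   # the weights sum to 1
```

Here `w` comes out as 0.8, 0.16, 0.032, 0.0064, 0.00128, 0.000256 — exactly the damping weights from the α = 0.8 example above.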

4.1.2 Choosing \(\alpha\)

Method Description
Subjective Near 1 for rapidly changing data; small for stable series
Objective Minimise MSE/MAPE/MAE over the estimation set

4.1.3 Steps

  1. Set initial value: \(F_1 = y_1\) (or average of first five observations).
  2. Find the best \(\alpha\) that minimises MSE.
  3. Calculate fitted values.
  4. Compute squared errors and MSE.
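Step 2 can be sketched as a simple grid search over \(\alpha\), minimising the in-sample MSE. The snippet below assumes the annual price series used in this chapter's worked examples:

```r
# Objective selection of alpha: evaluate the SES MSE over a grid, pick the minimum
price <- c(6, 8, 9, 12, 15, 16)          # worked-example data (assumed)

ses_mse <- function(alpha, y) {
  f <- numeric(length(y)); f[1] <- y[1]  # initial value F_1 = y_1
  for (i in 2:length(y))
    f[i] <- alpha * y[i - 1] + (1 - alpha) * f[i - 1]
  mean((y - f)[-1]^2)                    # exclude the initialisation period
}

grid <- seq(0.05, 0.95, by = 0.05)
mses <- sapply(grid, ses_mse, y = price)
grid[which.min(mses)]                    # alpha with the smallest MSE
```

For a strongly trending series like this one, the search pushes \(\alpha\) toward the upper end of the grid, consistent with the subjective guideline above.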

4.1.4 Example

Show R Code
alpha    <- 0.8
F_ses    <- numeric(n_p); F_ses[1] <- price[1]
for (i in 2:n_p)
  F_ses[i] <- alpha * price[i-1] + (1 - alpha) * F_ses[i-1]

e2_ses  <- (price - F_ses)^2
mse_ses <- mean(e2_ses[-1])

knitr::kable(
  data.frame(Year = year, Price = price,
             F_t = round(F_ses, 4), e2 = round(e2_ses, 4)),
  col.names = c("Year", "Price (RM)", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 5: Single exponential smoothing (α = 0.8) — worked example
Year Price (RM) \(F_t\) \(e_t^2\)
2018 6 6.0000 0.0000
2019 8 6.0000 4.0000
2020 9 7.6000 1.9600
2021 12 8.7200 10.7584
2022 15 11.3440 13.3663
2023 16 14.2688 2.9971
Show R Code
f2024_ses <- alpha * price[n_p] + (1 - alpha) * F_ses[n_p]
cat("Forecast for 2024 (SES, α = 0.8):", round(f2024_ses, 4), "\n")
Forecast for 2024 (SES, α = 0.8): 15.6538 
Show R Code
cat("MSE:", round(mse_ses, 4), "\n")
MSE: 6.6164 

4.1.5 SES using R — Malaysia CPI Data

Show R Code
SE_est <- ses(est_part)
summary(SE_est)

Forecast method: Simple exponential smoothing

Model Information:
Simple exponential smoothing 

Call:
ses(y = est_part)

  Smoothing parameters:
    alpha = 0.9999 

  Initial states:
    l = 21.3004 

  sigma:  2.0227

     AIC     AICc      BIC 
270.0030 270.5247 275.7391 

Error measures:
                   ME     RMSE      MAE      MPE     MAPE      MASE      ACF1
Training set 1.542145 1.981825 1.558155 2.974211 3.045736 0.9800978 0.4849583

Forecasts:
     Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
2010       98.39994 95.80776 100.9921 94.43554 102.3643
2011       98.39994 94.73422 102.0657 92.79371 104.0062
2012       98.39994 93.91044 102.8894 91.53385 105.2660
2013       98.39994 93.21596 103.5839 90.47173 106.3282
2014       98.39994 92.60410 104.1958 89.53597 107.2639
2015       98.39994 92.05094 104.7489 88.68998 108.1099
2016       98.39994 91.54225 105.2576 87.91201 108.8879
2017       98.39994 91.06878 105.7311 87.18789 109.6120
2018       98.39994 90.62408 106.1758 86.50779 110.2921
2019       98.39994 90.20347 106.5964 85.86452 110.9354
Show R Code
SE_eva <- ses(eva_part, alpha = 0.999)
summary(SE_eva)

Forecast method: Simple exponential smoothing

Model Information:
Simple exponential smoothing 

Call:
ses(y = eva_part, alpha = 0.999)

  Smoothing parameters:
    alpha = 0.999 

  Initial states:
    l = 100.002 

  sigma:  2.8618

     AIC     AICc      BIC 
62.50994 63.70994 63.63984 

Error measures:
                   ME     RMSE      MAE      MPE     MAPE      MASE        ACF1
Training set 2.093929 2.632431 2.309504 1.825748 2.005296 0.9238016 0.007182097

Forecasts:
     Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
2023       127.1959 123.5284 130.8634 121.5870 132.8048
2024       127.1959 122.0119 132.3799 119.2676 135.1242
2025       127.1959 120.8479 133.5439 117.4874 136.9044
2026       127.1959 119.8664 134.5254 115.9864 138.4054
2027       127.1959 119.0017 135.3901 114.6640 139.7278
2028       127.1959 118.2199 136.1719 113.4683 140.9235
2029       127.1959 117.5010 136.8908 112.3688 142.0230
2030       127.1959 116.8318 137.5600 111.3453 143.0465
2031       127.1959 116.2032 138.1886 110.3840 144.0077
2032       127.1959 115.6087 138.7831 109.4749 144.9169
Show R Code
autoplot(cpits) + ylab("CPI (2010 = 100)") + xlab("Year") +
  autolayer(forecast(ses(cpits, alpha = 0.999), h = 5),
            series = "SES Forecast", PI = TRUE) +
  labs(title = "Malaysia CPI: Single Exponential Smoothing Forecast") +
  theme_ts()
Figure 2: SES forecast for Malaysia CPI (α = 0.999)

4.2 Double Exponential Smoothing

Useful for series exhibiting a linear trend. Unlike SES, it generates multi-step-ahead forecasts.

\[S_t = \alpha y_t + (1-\alpha)S_{t-1}\] \[S'_t = \alpha S_t + (1-\alpha)S'_{t-1}\] \[a_t = 2S_t - S'_t\] \[b_t = \frac{\alpha}{1-\alpha}(S_t - S'_t)\] \[F_{t+m} = a_t + b_t \times m\]

4.2.1 Example

Show R Code
alpha <- 0.8
S <- numeric(n_p); Sp <- numeric(n_p)
S[1] <- price[1]; Sp[1] <- price[1]
for (i in 2:n_p) {
  S[i]  <- alpha * price[i]  + (1 - alpha) * S[i-1]
  Sp[i] <- alpha * S[i]      + (1 - alpha) * Sp[i-1]
}
a_t   <- 2 * S - Sp
b_t   <- (alpha / (1 - alpha)) * (S - Sp)
F_des <- c(NA, a_t[-n_p] + b_t[-n_p] * 1)
e2_des <- (price - F_des)^2
mse_des <- mean(e2_des, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             S = round(S, 4), Sp = round(Sp, 4),
             a = round(a_t, 4), b = round(b_t, 4),
             F_t = round(F_des, 4), e2 = round(e2_des, 4)),
  col.names = c("Year", "Price", "$S_t$", "$S'_t$",
                "$a_t$", "$b_t$", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 6: Double exponential smoothing (α = 0.8) — worked example
Year Price \(S_t\) \(S'_t\) \(a_t\) \(b_t\) \(F_t\) \(e_t^2\)
2018 6 6.0000 6.0000 6.0000 0.0000 NA NA
2019 8 7.6000 7.2800 7.9200 1.2800 6.000 4.0000
2020 9 8.7200 8.4320 9.0080 1.1520 9.200 0.0400
2021 12 11.3440 10.7616 11.9264 2.3296 10.160 3.3856
2022 15 14.2688 13.5674 14.9702 2.8058 14.256 0.5535
2023 16 15.6538 15.2365 16.0710 1.6691 17.776 3.1542
Show R Code
f2024_des <- a_t[n_p] + b_t[n_p] * 1
cat("Forecast for 2024 (DES, α = 0.8, m = 1):", round(f2024_des, 4), "\n")
Forecast for 2024 (DES, α = 0.8, m = 1): 17.7402 
Show R Code
cat("MSE:", round(mse_des, 4), "\n")
MSE: 2.2267 
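DES supports multi-step forecasts via \(F_{t+m} = a_t + b_t \times m\). The sketch below recomputes the worked example (α = 0.8) so it stands alone, then projects three years ahead:

```r
# DES multi-step forecasts from the final level (a_t) and slope (b_t)
alpha <- 0.8
price <- c(6, 8, 9, 12, 15, 16); n_p <- length(price)

S <- numeric(n_p); Sp <- numeric(n_p)
S[1] <- Sp[1] <- price[1]
for (i in 2:n_p) {
  S[i]  <- alpha * price[i] + (1 - alpha) * S[i - 1]   # single smoothing
  Sp[i] <- alpha * S[i]     + (1 - alpha) * Sp[i - 1]  # double smoothing
}
a_last <- 2 * S[n_p] - Sp[n_p]                         # level at 2023
b_last <- (alpha / (1 - alpha)) * (S[n_p] - Sp[n_p])   # slope at 2023

m <- 1:3
data.frame(Year = 2023 + m, Forecast = round(a_last + b_last * m, 4))
```

The m = 1 value reproduces the 17.7402 forecast above; each further step adds the slope \(b_t \approx 1.6691\).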

Excel Tutorial: Double Exponential Smoothing in Excel (α = 0.6)


4.3 Holt’s Method

Suitable for linear trend with more flexibility than DES — it uses two separate smoothing constants for level (\(\alpha\)) and slope (\(\beta\)).

Level: \(L_t = \alpha y_t + (1-\alpha)(L_{t-1} + T_{t-1})\)

Trend: \(T_t = \beta(L_t - L_{t-1}) + (1-\beta)T_{t-1}\)

Forecast: \(F_{t+m} = L_t + T_t \times m\)

where \(0 \leq \alpha \leq 1\), \(0 \leq \beta \leq 1\).

4.3.1 Example

Show R Code
alpha_h <- 0.6; beta_h <- 0.2
L <- numeric(n_p); Tr <- numeric(n_p)
L[1] <- price[1]; Tr[1] <- price[2] - price[1]
for (i in 2:n_p) {
  L[i]  <- alpha_h * price[i] + (1 - alpha_h) * (L[i-1] + Tr[i-1])
  Tr[i] <- beta_h * (L[i] - L[i-1]) + (1 - beta_h) * Tr[i-1]
}
F_holt <- c(NA, L[-n_p] + Tr[-n_p] * 1)
e2_h   <- (price - F_holt)^2
mse_h  <- mean(e2_h, na.rm = TRUE)

knitr::kable(
  data.frame(Year = year, Price = price,
             L = round(L, 4), Tr = round(Tr, 4),
             F_t = round(F_holt, 4), e2 = round(e2_h, 4)),
  col.names = c("Year", "Price (RM)", "$L_t$", "$T_t$", "$F_t$", "$e_t^2$"),
  align = "c"
)
Table 7: Holt’s method (α = 0.6, β = 0.2) — worked example
Year Price (RM) \(L_t\) \(T_t\) \(F_t\) \(e_t^2\)
2018 6 6.0000 2.0000 NA NA
2019 8 8.0000 2.0000 8.0000 0.0000
2020 9 9.4000 1.8800 10.0000 1.0000
2021 12 11.7120 1.9664 11.2800 0.5184
2022 15 14.4714 2.1250 13.6784 1.7466
2023 16 16.2385 2.0534 16.5964 0.3556
Show R Code
f2024_holt <- L[n_p] + Tr[n_p] * 1
cat("Forecast for 2024 (Holt, α = 0.6, β = 0.2):", round(f2024_holt, 4), "\n")
Forecast for 2024 (Holt, α = 0.6, β = 0.2): 18.292 
Show R Code
cat("MSE:", round(mse_h, 4), "\n")
MSE: 0.7241 

4.3.2 Holt’s Method using R — Malaysia CPI Data

Show R Code
Holt_est <- holt(est_part)
summary(Holt_est)

Forecast method: Holt's method

Model Information:
Holt's method 

Call:
holt(y = est_part)

  Smoothing parameters:
    alpha = 0.9922 
    beta  = 0.3708 

  Initial states:
    l = 21.2128 
    b = -0.0636 

  sigma:  1.1718

     AIC     AICc      BIC 
217.2861 218.6498 226.8462 

Error measures:
                    ME    RMSE       MAE       MPE     MAPE      MASE      ACF1
Training set 0.1243695 1.12395 0.8068117 0.4127585 1.756522 0.5074939 0.1005745

Forecasts:
     Point Forecast     Lo 80    Hi 80     Lo 95    Hi 95
2010       100.6626  99.16086 102.1643  98.36590 102.9593
2011       102.9050 100.36634 105.4436  99.02247 106.7875
2012       105.1474 101.51089 108.7839  99.58585 110.7089
2013       107.3898 102.57173 112.2078 100.02122 114.7583
2014       109.6322 103.54667 115.7176 100.32520 118.9391
2015       111.8745 104.43792 119.3112 100.50120 123.2479
2016       114.1169 105.24876 122.9851 100.55423 127.6796
2017       116.3593 105.98259 126.7361 100.48948 132.2292
2018       118.6017 106.64262 130.5608 100.31186 136.8916
2019       120.8441 107.23180 134.4564 100.02588 141.6624
Show R Code
Holt_eva <- holt(eva_part, alpha = 0.9922, beta = 0.3708)
summary(Holt_eva)

Forecast method: Holt's method

Model Information:
Holt's method 

Call:
holt(y = eva_part, alpha = 0.9922, beta = 0.3708)

  Smoothing parameters:
    alpha = 0.9922 
    beta  = 0.3708 

  Initial states:
    l = 97.4232 
    b = 2.5846 

  sigma:  2.0339

     AIC     AICc      BIC 
53.02336 55.69003 54.71821 

Error measures:
                      ME     RMSE      MAE         MPE     MAPE      MASE
Training set -0.02408487 1.692347 1.361851 -0.03265661 1.146471 0.5447402
                  ACF1
Training set 0.0400953

Forecasts:
     Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
2023       129.6481 127.0414 132.2547 125.6616 133.6345
2024       132.1166 127.7101 136.5230 125.3775 138.8557
2025       134.5851 128.2731 140.8971 124.9317 144.2384
2026       137.0536 128.6908 145.4164 124.2638 149.8433
2027       139.5221 128.9594 150.0847 123.3679 155.6763
2028       141.9906 129.0828 154.8983 122.2499 161.7313
2029       144.4591 129.0667 159.8515 120.9185 167.9997
2030       146.9276 128.9169 164.9383 119.3826 174.4726
2031       149.3961 128.6390 170.1532 117.6509 181.1414
2032       151.8646 128.2382 175.4911 115.7311 187.9982
Show R Code
autoplot(cpits) + ylab("CPI (2010 = 100)") + xlab("Year") +
  autolayer(forecast(holt(cpits, alpha = 0.9922, beta = 0.3708), h = 5),
            series = "Holt Forecast", PI = TRUE) +
  labs(title = "Malaysia CPI: Holt's Method Forecast") +
  theme_ts()
Figure 3: Holt’s method forecast for Malaysia CPI (α = 0.9922, β = 0.3708)

Excel Tutorial: Holt’s Method in Excel (α = 0.6, β = 0.2)


4.4 Adaptive Response Rate Exponential Smoothing (ARRES)

In SES, \(\alpha\) is constant, but structural changes in the data may make a fixed \(\alpha\) unrealistic. ARRES adapts \(\alpha\) at each period based on the ratio of the smoothed error to the smoothed absolute error.

\[F_{t+1} = \alpha_t y_t + (1-\alpha_t)F_t\]

\[\alpha_t = \left|\frac{E_t}{AE_t}\right|, \quad e_t = y_t - F_t\]

\[E_t = \beta e_t + (1-\beta)E_{t-1} \qquad \text{(smoothed average error)}\]

\[AE_t = \beta|e_t| + (1-\beta)AE_{t-1} \qquad \text{(smoothed absolute error)}\]

where \(0 < \beta < 1\).

4.4.1 Example

Show R Code
beta_a <- 0.2
F_ar   <- numeric(n_p); e_ar  <- numeric(n_p)
E_ar   <- numeric(n_p); AE_ar <- numeric(n_p)
alp_ar <- numeric(n_p)

alp_ar[1] <- 0.6; F_ar[1] <- price[1]
e_ar[1]   <- 0;   E_ar[1] <- 0; AE_ar[1] <- 0

for (i in 2:n_p) {
  F_ar[i]  <- alp_ar[i-1] * price[i-1] + (1 - alp_ar[i-1]) * F_ar[i-1]
  e_ar[i]  <- price[i] - F_ar[i]
  E_ar[i]  <- beta_a * e_ar[i]       + (1 - beta_a) * E_ar[i-1]
  AE_ar[i] <- beta_a * abs(e_ar[i])  + (1 - beta_a) * AE_ar[i-1]
  alp_ar[i] <- ifelse(AE_ar[i] == 0, 0, abs(E_ar[i] / AE_ar[i]))
}

e2_ar  <- e_ar^2
mse_ar <- mean(e2_ar[-1])

knitr::kable(
  data.frame(Year = year, Price = price,
             F_t = round(F_ar, 4), e = round(e_ar, 4),
             E = round(E_ar, 4), AE = round(AE_ar, 4),
             alpha = round(alp_ar, 4), e2 = round(e2_ar, 4)),
  col.names = c("Year", "Price (RM)", "$F_t$", "$e_t$",
                "$E_t$", "$AE_t$", "$\\alpha_t$", "$e_t^2$"),
  align = "c"
)
Table 8: ARRES (α₁ = 0.6, β = 0.2) — worked example
Year Price (RM) \(F_t\) \(e_t\) \(E_t\) \(AE_t\) \(\alpha_t\) \(e_t^2\)
2018 6 6 0 0.0000 0.0000 0.6 0
2019 8 6 2 0.4000 0.4000 1.0 4
2020 9 8 1 0.5200 0.5200 1.0 1
2021 12 9 3 1.0160 1.0160 1.0 9
2022 15 12 3 1.4128 1.4128 1.0 9
2023 16 15 1 1.3302 1.3302 1.0 1
Show R Code
cat("MSE (ARRES, β = 0.2):", round(mse_ar, 4), "\n")
MSE (ARRES, β = 0.2): 4.8 
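In the worked example every error is positive, so \(E_t = AE_t\) and \(\alpha_t\) stays pinned at 1. The adaptation only shows once the errors change sign. A minimal sketch with an illustrative, made-up series that turns downward:

```r
# ARRES on a series that changes direction: watch alpha_t fall after the turn
beta <- 0.2
y    <- c(10, 12, 14, 13, 11, 12, 10)   # illustrative data, not from the chapter
n    <- length(y)
F_a  <- numeric(n); e <- numeric(n)
E    <- numeric(n); AE <- numeric(n); a <- numeric(n)
a[1] <- 0.6; F_a[1] <- y[1]

for (i in 2:n) {
  F_a[i] <- a[i - 1] * y[i - 1] + (1 - a[i - 1]) * F_a[i - 1]
  e[i]   <- y[i] - F_a[i]
  E[i]   <- beta * e[i]      + (1 - beta) * E[i - 1]   # smoothed error
  AE[i]  <- beta * abs(e[i]) + (1 - beta) * AE[i - 1]  # smoothed absolute error
  a[i]   <- ifelse(AE[i] == 0, 0, abs(E[i] / AE[i]))
}
round(a, 4)   # alpha_t drops below 1 once positive and negative errors cancel
```

While the series rises, \(\alpha_t = 1\) exactly as in Table 8; after the turn at 2021 (fourth observation), positive and negative errors partially cancel in \(E_t\) and \(\alpha_t\) falls.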

Excel Tutorial: ARRES in Excel (β = 0.2)


4.5 Holt-Winters Trend and Seasonality

When a seasonal component exists, Holt-Winters is the most suitable method. There are two model variants:

4.5.1 Multiplicative Model

\[L_t = \alpha\frac{y_t}{S_{t-s}} + (1-\alpha)(L_{t-1}+T_{t-1})\] \[T_t = \beta(L_t-L_{t-1}) + (1-\beta)T_{t-1}\] \[S_t = \gamma\frac{y_t}{L_t} + (1-\gamma)S_{t-s}\] \[F_{t+m} = (L_t + T_t \times m)\cdot S_{t-s+m}\]

4.5.2 Additive Model

\[L_t = \alpha(y_t - S_{t-s}) + (1-\alpha)(L_{t-1}+T_{t-1})\] \[T_t = \beta(L_t-L_{t-1}) + (1-\beta)T_{t-1}\] \[S_t = \gamma(y_t - L_t) + (1-\gamma)S_{t-s}\] \[F_{t+m} = L_t + T_t \times m + S_{t-s+m}\]

where \(0\leq\alpha,\beta,\gamma\leq 1\); \(s\) = season length (\(s=4\) quarterly, \(s=12\) monthly).

4.5.3 Example — Quarterly Data (α = 0.6, β = 0.2, γ = 0.4)

Show R Code
hw_data <- data.frame(Year    = c(rep(2022,4), rep(2023,4)),
                      Quarter = rep(1:4, 2),
                      Price   = c(6, 8, 3, 1, 8, 10, 5, 3))
knitr::kable(hw_data, align = "c")
Table 9: Holt-Winters example — quarterly price data
Year Quarter Price
2022 1 6
2022 2 8
2022 3 3
2022 4 1
2023 1 8
2023 2 10
2023 3 5
2023 4 3
Show R Code
hw_ts  <- ts(hw_data$Price, start = c(2022,1), frequency = 4)
hw_fit <- hw(hw_ts, seasonal = "additive",
             alpha = 0.6, beta = 0.2, gamma = 0.4)
summary(hw_fit)

Forecast method: Holt-Winters' additive method

Model Information:
Holt-Winters' additive method 

Call:
hw(y = hw_ts, seasonal = "additive", alpha = 0.6, beta = 0.2, 
    gamma = 0.4)

  Smoothing parameters:
    alpha = 0.6 
    beta  = 0.2 
    gamma = 0.4 

  Initial states:
    l = 4.5 
    b = 0.5 
    s = -3.5 -1.5 3.5 1.5

  sigma:  0.7929
Error measures:
                     ME      RMSE       MAE       MPE     MAPE     MASE
Training set -0.1462946 0.7929204 0.6676439 -11.58299 17.89702 0.333822
                  ACF1
Training set 0.1531588

Forecasts:
        Point Forecast    Lo 80     Hi 80      Lo 95     Hi 95
2024 Q1       9.321220 8.305051 10.337388  7.7671242 10.875315
2024 Q2      11.050246 9.748915 12.351576  9.0600325 13.040459
2024 Q3       6.147005 4.495927  7.798083  3.6218986  8.672111
2024 Q4       4.438229 2.385669  6.490788  1.2991102  7.577348
2025 Q1      10.759448 8.010154 13.508743  6.5547660 14.964131
2025 Q2      12.488474 9.294407 15.682542  7.6035694 17.373380
2025 Q3       7.585234 3.904515 11.265952  1.9560608 13.214406
2025 Q4       5.876458 1.671927 10.080989 -0.5538171 12.306733

4.5.4 Holt-Winters using R — Monthly Electricity Data

Show R Code
HW_est <- hw(est_elec)
summary(HW_est)

Forecast method: Holt-Winters' additive method

Model Information:
Holt-Winters' additive method 

Call:
hw(y = est_elec)

  Smoothing parameters:
    alpha = 0.9253 
    beta  = 4e-04 
    gamma = 1e-04 

  Initial states:
    l = 102.4914 
    b = 0.1712 
    s = -0.233 -2.2881 3.3367 -0.561 3.7136 1.0978
           -3.7209 3.5383 -0.1766 4.6854 -8.8695 -0.5228

  sigma:  3.8523

     AIC     AICc      BIC 
631.3403 640.3403 673.0642 

Error measures:
                     ME     RMSE      MAE         MPE     MAPE      MASE
Training set 0.01529911 3.475488 2.522562 -0.03638605 2.255034 0.4535252
                   ACF1
Training set 0.06272114

Forecasts:
         Point Forecast     Lo 80    Hi 80     Lo 95    Hi 95
Mar 2022       123.3920 118.45517 128.3289 115.84175 130.9423
Apr 2022       118.7020 111.97466 125.4294 108.41339 128.9907
May 2022       122.5889 114.45510 130.7227 110.14932 135.0285
Jun 2022       115.5018 106.17037 124.8333 101.23060 129.7730
Jul 2022       120.4914 110.09866 130.8841 104.59708 136.3857
Aug 2022       123.2787 111.92278 134.6347 105.91130 140.6462
Sep 2022       119.1760 106.93167 131.4203 100.44993 137.9020
Oct 2022       123.2463 110.17328 136.3193 103.25285 143.2397
Nov 2022       117.7928 103.94001 131.6456  96.60677 138.9789
Dec 2022       120.0196 105.42804 134.6111  97.70374 142.3354
Jan 2023       119.9012 104.60604 135.1964  96.50927 143.2932
Feb 2023       111.7258  95.75746 127.6941  87.30434 136.1472
Mar 2023       125.4525 108.83772 142.0674 100.04236 150.8627
Apr 2023       120.7626 103.52511 138.0000  94.40016 147.1249
May 2023       124.6494 106.81062 142.4882  97.36734 151.9315
Jun 2023       117.5623  99.14137 135.9833  89.38991 145.7347
Jul 2023       122.5519 103.56620 141.5376  93.51579 151.5880
Aug 2023       125.3392 105.80473 144.8738  95.46378 155.2147
Sep 2023       121.2365 101.16772 141.3052  90.54397 151.9290
Oct 2023       125.3068 104.71729 145.8963  93.81787 156.7957
Nov 2023       119.8533  98.75553 140.9511  87.58704 152.1196
Dec 2023       122.0801 100.48560 143.6746  89.05417 155.1060
Jan 2024       121.9617  99.88133 144.0421  88.19268 155.7307
Feb 2024       113.7863  91.23011 136.3424  79.28959 148.2830
Show R Code
HW_eva <- hw(eva_elec, alpha = 0.9522, beta = 0.0002, gamma = 0.0008)
summary(HW_eva)

Forecast method: Holt-Winters' additive method

Model Information:
Holt-Winters' additive method 

Call:
hw(y = eva_elec, alpha = 0.9522, beta = 2e-04, gamma = 8e-04)

  Smoothing parameters:
    alpha = 0.9522 
    beta  = 2e-04 
    gamma = 8e-04 

  Initial states:
    l = 123.1084 
    b = 0.238 
    s = -11.6018 -3.1742 -1.905 -2.8557 2.5453 -1.8175
           3.9752 4.0803 1.2517 6.6506 -0.5222 3.3732

  sigma:  2.9547

     AIC     AICc      BIC 
115.0885 175.0885 130.3631 

Error measures:
                      ME     RMSE       MAE          MPE      MAPE      MASE
Training set 0.006215554 1.543055 0.9930067 -0.004220897 0.7778301 0.2679319
                   ACF1
Training set -0.3706544

Forecasts:
         Point Forecast    Lo 80    Hi 80     Lo 95    Hi 95
Jan 2024       125.5331 121.7465 129.3198 119.74199 131.3243
Feb 2024       117.3436 112.1144 122.5729 109.34626 125.3410
Mar 2024       132.5567 126.2040 138.9093 122.84116 142.2722
Apr 2024       128.8994 121.5937 136.2051 117.72633 140.0725
May 2024       136.3103 128.1619 144.4586 123.84847 148.7721
Jun 2024       131.1494 122.2374 140.0614 117.51971 144.7791
Jul 2024       134.2160 124.6006 143.8315 119.51047 148.9216
Aug 2024       134.3490 124.0779 144.6201 118.64071 150.0573
Sep 2024       128.7944 117.9068 139.6820 112.14325 145.4456
Oct 2024       133.3954 121.9241 144.8666 115.85160 150.9391
Nov 2024       128.2324 116.2056 140.2592 109.83893 146.6259
Dec 2024       129.4212 116.8631 141.9793 110.21526 148.6271
Jan 2025       128.3900 115.3212 141.4588 108.40298 148.3770
Feb 2025       120.2005 106.6408 133.7602  99.46272 140.9383
Mar 2025       135.4135 121.3799 149.4472 113.95097 156.8761
Apr 2025       131.7563 117.2640 146.2485 109.59229 153.9202
May 2025       139.1671 124.2302 154.1041 116.32300 162.0113
Jun 2025       134.0063 118.6372 149.3753 110.50136 157.5111
Jul 2025       137.0729 121.2835 152.8623 112.92504 161.2207
Aug 2025       137.2059 121.0068 153.4049 112.43149 161.9802
Sep 2025       131.6513 115.0524 148.2501 106.26556 157.0369
Oct 2025       136.2522 119.2629 153.2415 110.26934 162.2351
Nov 2025       131.0893 113.7181 148.4604 104.52236 157.6561
Dec 2025       132.2781 114.5331 150.0230 105.13949 159.4166
Show R Code
autoplot(electricts) + ylab("Output Index") + xlab("Year") +
  autolayer(forecast(hw(electricts, alpha = 0.9522,
                        beta = 0.0002, gamma = 0.0008), h = 12),
            series = "HW Forecast", PI = TRUE) +
  labs(title = "Electricity Output Index: Holt-Winters Forecast") +
  theme_ts()
Figure 4: Holt-Winters 12-month forecast for electricity output index

Excel Tutorial: Holt-Winters in Excel (α = 0.6, β = 0.2, γ = 0.1)


5 Forecasting Using Decomposition Method

Assumes past patterns repeat in the future. Suitable for data with both trend and seasonal components. Cyclical and irregular components are excluded.

Relationship Forecast
Multiplicative \(F_t = \hat{T}_t \times S_t\)
Additive \(F_t = \hat{T}_t + S_t\)

5.0.1 Steps

Step Action
1 Compute trend (\(\hat{T}_t\)) using linear regression or moving average
2 Compute seasonal indices (\(S_t\))
3 Combine trend and seasonal index to obtain forecasts

5.0.2 Example

Seasonal indices for motorcycle production (’000), 2021–2023:

Quarter Q1 Q2 Q3 Q4
Seasonal Index 75 96 91 138

Trend: \(\hat{T}_t = 3.87 + 1.14t\), where \(t = 1\) for Q1 2021.

Forecast Q1 and Q4 of 2024:

Show R Code
SI  <- c(75, 96, 91, 138)
t_vals <- c(13, 16)   # Q1 2024 = t13, Q4 2024 = t16
T_hat  <- 3.87 + 1.14 * t_vals
F_vals <- T_hat * SI[c(1,4)] / 100

knitr::kable(
  data.frame(Period = c("Q1 2024","Q4 2024"),
             t = t_vals, T_hat = round(T_hat, 4),
             SI = SI[c(1,4)], Forecast = round(F_vals, 4)),
  col.names = c("Period","$t$","$\\hat{T}_t$","SI","Forecast ('000)"),
  align = "c"
)
Table 10: Decomposition forecast for Q1 and Q4 of 2024
Period \(t\) \(\hat{T}_t\) SI Forecast (’000)
Q1 2024 13 18.69 75 14.0175
Q4 2024 16 22.11 138 30.5118

5.0.3 Decomposition using R — Electricity Data

Show R Code
decomp_fit <- decompose(electricts, type = "multiplicative")
t_vec      <- seq_along(electricts)
lm_elec    <- lm(electricts ~ t_vec)

t_future   <- (length(electricts) + 1):(length(electricts) + 12)
T_future   <- predict(lm_elec, newdata = data.frame(t_vec = t_future))
SI_elec    <- decomp_fit$seasonal
SI_future  <- as.numeric(SI_elec[((t_future - 1) %% 12) + 1])
F_future   <- ts(T_future * SI_future, start = c(2024,1), frequency = 12)

autoplot(electricts) +
  autolayer(F_future, series = "Decomposition Forecast") +
  labs(title = "Electricity Output: Decomposition-Based Forecast (2024)",
       y = "Output Index", x = "Year") +
  theme_ts()
Figure 5: Decomposition-based 12-month forecast for electricity output index

6 Method Comparison

Show R Code
# Compute HW MSE from the quarterly example (hw_ts / hw_fit already in environment)
e2_hw   <- (hw_ts - fitted(hw_fit))^2
mse_hw  <- mean(e2_hw)   # all 8 quarterly observations

# n = number of errors contributing to each MSE
# (rows with NA forecast are excluded; SES/ARRES exclude init row t=1)
n_errors <- c(
  sum(!is.na(F_naive)),    # 5
  sum(!is.na(F_trend)),    # 4
  sum(!is.na(F_ac)),       # 3
  sum(!is.na(F_apc)),      # 3
  length(F_ses) - 1,       # 5  (exclude init)
  sum(!is.na(F_des)),      # 5
  sum(!is.na(F_holt)),     # 5
  length(F_ar) - 1,        # 5  (exclude init)
  length(hw_ts)            # 8  (quarterly data)
)

data.frame(
  Method    = c("Naive","Naive with Trend","Avg Change",
                "Avg % Change","SES","DES","Holt's","ARRES",
                "Holt-Winters *"),
  Trend     = c("No","Yes","No","No","No","Yes","Yes","No","Yes"),
  Seasonal  = c("No","No","No","No","No","No","No","No","Yes"),
  MultiStep = c("Yes","Yes","Yes","Yes","No","Yes","Yes","No","Yes"),
  n         = n_errors,
  MSE       = round(c(mse_naive, mse_trend, mse_ac, mse_apc,
                      mse_ses,   mse_des,   mse_h,  mse_ar,
                      mse_hw), 4)
) |>
  knitr::kable(col.names = c("Method","Handles Trend","Handles Seasonal",
                              "Multi-step","n","MSE"),
               align = "c")
Table 11: Summary comparison of univariate methods on the worked example
Method Handles Trend Handles Seasonal Multi-step n MSE
Naive No No Yes 5 4.8000
Naive with Trend Yes No Yes 4 3.7140
Avg Change No No Yes 3 2.4167
Avg % Change No No Yes 3 4.1107
SES No No No 5 6.6164
DES Yes No Yes 5 2.2267
Holt’s Yes No Yes 5 0.7241
ARRES No No No 5 4.8000
Holt-Winters * Yes Yes Yes 8 0.6287
Note

Note on Holt-Winters MSE: Holt-Winters requires seasonal data, so it is evaluated on a different dataset (quarterly, \(n = 8\)) from all the other methods (annual price data, \(n = 6\)). Its MSE, marked with *, is therefore not directly comparable with the others.
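Since the naive method is the benchmark (Section 2.1), one way to read Table 11 is as MSE ratios relative to naive, where values below 1 beat the benchmark. A sketch using the MSE values from Table 11 (Holt-Winters excluded because it uses a different dataset):

```r
# Relative MSE against the naive benchmark (values taken from Table 11)
mse <- c(Naive = 4.8, NaiveTrend = 3.714, AvgChange = 2.4167,
         AvgPctChange = 4.1107, SES = 6.6164, DES = 2.2267,
         Holt = 0.7241, ARRES = 4.8)
round(mse / mse["Naive"], 3)   # < 1: beats the benchmark; > 1: does not
```

On this short series only SES fails to beat the benchmark, and ARRES ties it (its \(\alpha_t\) locks at 1, reducing it to the naive forecast). Because the number of contributing errors differs across methods (the n column), these ratios are indicative only.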


7 Summary

  • Univariate models use only past values — no explanatory variables.
  • Naive (\(F_{t+m} = y_t\)) is the benchmark; all other methods should outperform it.
  • Naive with Trend adjusts for the growth rate; highly sensitive to recent changes.
  • Average Change / Percent Change average historical movements; suitable for short series.
  • SES uses exponentially decreasing weights with one parameter \(\alpha\); one-step-ahead only.
  • DES and Holt’s handle linear trends with multi-step capability; Holt’s uses two constants.
  • ARRES adapts \(\alpha\) over time — ideal when data behaviour changes.
  • Holt-Winters adds seasonal smoothing (\(\gamma\)) for trend + seasonal data.
  • Decomposition forecasting extracts trend and seasonal separately, then combines them.

8 References

Data: https://drive.google.com/drive/folders/1SlWHeWAAVXre62j9VC-nKDdKiVd1KaM3?usp=sharing